A Comparative Study of RPCL and MCE Based Discriminative Training Methods for LVCSR

Authors

  • Zaihu Pang
  • Xihong Wu
  • Lei Xu
Abstract

This paper presents a comparative study of two discriminative training methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), for large vocabulary continuous speech recognition (LVCSR) tasks. MCE aims at minimizing a smoothed sentence error on the training data, while RPCL focuses on avoiding misclassification by enforcing the learning of the correct class and de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the phone and/or hidden-Markov-state levels using the same training corpus. The results show that both the MCE and RPCL based methods outperform the Maximum Likelihood Estimation (MLE) based method. Compared with the MCE based method, the RPCL based methods show better discriminative and generalization abilities at both levels. © 2014 Elsevier B.V. All rights reserved.
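To make the contrast concrete, the sketch below illustrates the two mechanisms in their simplest form: an RPCL-style step that pulls the correct class toward a training frame while pushing away (de-learning) its best rival, and an MCE-style smoothed error obtained by passing a misclassification measure through a sigmoid. This is a minimal illustration only; the class means, scores, learning rates, and smoothing constants are hypothetical and stand in for the full GMM-HMM training described in the paper.

```python
import numpy as np

def rpcl_update(means, x, correct, eta_learn=0.01, eta_delearn=0.002):
    """RPCL-style step: move the correct class's mean toward frame x and
    push the best rival (most competitive wrong class) away (de-learning)."""
    dists = np.linalg.norm(means - x, axis=1)
    dists[correct] = np.inf                 # exclude the correct class when picking the rival
    rival = int(np.argmin(dists))           # best rival = closest wrong class to x
    means[correct] += eta_learn * (x - means[correct])    # learn the correct class
    means[rival] -= eta_delearn * (x - means[rival])      # de-learn (penalize) the rival
    return means

def mce_loss(scores, correct, alpha=1.0, psi=2.0):
    """MCE-style smoothed error: sigmoid of a misclassification measure that
    contrasts the correct-class score with a soft maximum of competing scores."""
    others = np.delete(scores, correct)
    g_anti = np.log(np.mean(np.exp(psi * others))) / psi  # smoothed best-competitor score
    d = -scores[correct] + g_anti                         # misclassification measure
    return 1.0 / (1.0 + np.exp(-alpha * d))               # smoothed 0/1 loss

# Toy usage: three classes in a 2-D feature space (all values are made up).
means = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 0.0]])
x, correct = np.array([0.5, 0.4]), 0
means = rpcl_update(means, x, correct)
print(mce_loss(np.array([3.1, 2.8, 0.5]), correct))
```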

Related articles

Discriminative training of GMM-HMM acoustic model by RPCL learning

This paper presents a new discriminative approach for training the Gaussian mixture models (GMMs) of a hidden Markov model (HMM) based acoustic model in a large vocabulary continuous speech recognition (LVCSR) system. The approach is characterized by embedding a rival penalized competitive learning (RPCL) mechanism at the level of hidden Markov states. For every input, the correct identity state, calle...

Discriminative training of GMM-HMM acoustic model by RPCL type Bayesian Ying-Yang harmony learning

This paper presents a new discriminative approach for training the Gaussian mixture models (GMMs) of a hidden Markov model (HMM) based acoustic model in a large vocabulary continuous speech recognition (LVCSR) system. The approach is characterized by embedding a rival penalized competitive learning (RPCL) mechanism at the level of hidden Markov states. For every input, the correct identity state, calle...

Large-Margin Gaussian Mixture Modeling for Automatic Speech Recognition

Discriminative training for acoustic models has been widely studied to improve the performance of automatic speech recognition systems. To enhance the generalization ability of discriminatively trained models, a large-margin training framework has recently been proposed. This work investigates large-margin training in detail, integrates the training with more flexible classifier structures such...
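As a rough illustration of the large-margin idea mentioned above, the snippet below penalizes any competitor whose score comes within a fixed margin of the correct class's score, using a hinge loss. The margin value and scores are made up and do not reflect the cited work's actual classifier structures.

```python
# Hedged sketch of a large-margin criterion: the correct class must beat every
# competitor by at least `margin`; violations incur a hinge penalty.
def large_margin_loss(correct_score, competitor_scores, margin=1.0):
    return sum(max(0.0, margin - (correct_score - s)) for s in competitor_scores)

# Toy usage with made-up acoustic scores.
print(large_margin_loss(correct_score=5.0, competitor_scores=[4.6, 2.0, 4.9]))
```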

Phone-discriminating minimum classification error (p-MCE) training for phonetic recognition

In this paper, we report a study comparing the performance of discriminative training methods for phone recognition on the TIMIT database. We propose a new method, phone-discriminating minimum classification error (P-MCE), which performs MCE training at the sub-string or phone level instead of at the traditional string level. Aiming at minimizing the phone recognition error rate, P-MCE nev...
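The sketch below contrasts the two granularities described in this abstract: a single string-level smoothed error for a whole utterance versus a P-MCE-style loss accumulated over aligned phone segments. All scores, segment boundaries, and the smoothing constant are invented for illustration.

```python
import math

def smoothed_error(correct_score, competitor_score, alpha=1.0):
    # Sigmoid of the misclassification measure d = competitor - correct.
    return 1.0 / (1.0 + math.exp(-alpha * (competitor_score - correct_score)))

# String-level MCE: one smoothed error for the whole utterance.
string_loss = smoothed_error(correct_score=-210.0, competitor_score=-214.0)

# Phone-level (P-MCE-style): one loss per aligned phone segment, then averaged.
# Each tuple holds (correct-phone log-score, best competing phone log-score).
phone_scores = [(-30.1, -32.5), (-41.0, -40.2), (-27.3, -29.9)]
phone_loss = sum(smoothed_error(c, r) for c, r in phone_scores) / len(phone_scores)

print(f"string-level loss: {string_loss:.3f}  phone-level loss: {phone_loss:.3f}")
```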

Selective MCE training strategy in Mandarin speech recognition

The use of discriminative training methods in speech recognition is a promising approach. The minimum classification error (MCE) based discriminative methods have been extensively studied and successfully applied to speech recognition [1][2][3], speaker recognition [4], and utterance verification [5][6]. Our goal is to modify the embedded string model based MCE algorithm to train a large number...


Journal:
  • Neurocomputing

Volume: 134

Issue: -

Pages: -

Publication date: 2011